
Explainable Artificial Intelligence for Drug Discovery and Development -- A Comprehensive Survey

Alizadehsani, Roohallah, Oyelere, Solomon Sunday, Hussain, Sadiq, Calixto, Rene Ripardo, de Albuquerque, Victor Hugo C., Roshanzamir, Mohamad, Rahouti, Mohamed, Jagatheesaperumal, Senthil Kumar

arXiv.org Artificial Intelligence

The field of drug discovery has experienced a remarkable transformation with the advent of artificial intelligence (AI) and machine learning (ML) technologies. However, as these AI and ML models become more complex, there is a growing need for their transparency and interpretability. Explainable Artificial Intelligence (XAI) addresses this issue by providing a more interpretable understanding of the predictions made by machine learning models. In recent years, there has been increasing interest in applying XAI techniques to drug discovery. This review article provides a comprehensive overview of the current state of the art in XAI for drug discovery, covering the main XAI methods, their applications in drug discovery (including target identification, compound design, and toxicity prediction), and the challenges and limitations of XAI techniques in this domain. Furthermore, the article suggests potential future research directions for the application of XAI in drug discovery. The aim of this review article is to provide a comprehensive understanding of the current state of XAI in drug discovery and its potential to transform the field.


An Autoethnographic Exploration of XAI in Algorithmic Composition

Noel-Hirst, Ashley, Bryan-Kinns, Nick

arXiv.org Artificial Intelligence

Machine Learning models are capable of generating complex music across a range of genres from folk to classical music. However, current generative music AI models are typically difficult to understand and control in meaningful ways. Whilst research has started to explore how explainable AI (XAI) generative models might be created for music, no generative XAI models have been studied in music making practice. This paper introduces an autoethnographic study of the use of the MeasureVAE generative music XAI model with interpretable latent dimensions trained on Irish folk music. Findings suggest that the exploratory nature of the music-making workflow foregrounds musical features of the training dataset rather than features of the generative model itself. The appropriation of an XAI model within an iterative workflow highlights the potential of XAI models to form part of a richer and more complex workflow than they were initially designed for.


Measuring Perceived Trust in XAI-Assisted Decision-Making by Eliciting a Mental Model

Onari, Mohsen Abbaspour, Grau, Isel, Nobile, Marco S., Zhang, Yingqian

arXiv.org Artificial Intelligence

This empirical study proposes a novel methodology to measure users' perceived trust in an Explainable Artificial Intelligence (XAI) model. To do so, users' mental models are elicited using Fuzzy Cognitive Maps (FCMs). First, we exploit an interpretable Machine Learning (ML) model to classify suspected COVID-19 patients into positive or negative cases. Then, Medical Experts (MEs) conduct a diagnostic decision-making task based on their own knowledge and on the predictions and interpretations provided by the XAI model. In order to evaluate the impact of interpretations on perceived trust, explanation satisfaction attributes are rated by MEs through a survey. These attributes are then treated as FCM concepts to determine their influences on each other and, ultimately, on the perceived trust. Moreover, to account for MEs' mental subjectivity, fuzzy linguistic variables are used to determine the strength of the influences. After the FCMs reach their steady state, a quantified value is obtained that measures each ME's perceived trust. The results show that the quantified values can determine whether MEs trust or distrust the XAI model. We analyze this behavior by comparing the quantified values with MEs' performance in completing diagnostic tasks.
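The FCM steady-state computation described in this abstract can be sketched as follows. The concept names, weight matrix, initial activations, and sigmoid squashing function below are illustrative assumptions, not the study's actual elicited values.

```python
# Minimal sketch of a Fuzzy Cognitive Map (FCM) iterated to a steady state.
import math

def sigmoid(x, lam=1.0):
    # Squashing function keeping concept activations in (0, 1).
    return 1.0 / (1.0 + math.exp(-lam * x))

def run_fcm(weights, state, max_iters=100, tol=1e-5):
    """Iterate A_i(t+1) = f(A_i(t) + sum_j A_j(t) * w_ji) until convergence."""
    n = len(state)
    for _ in range(max_iters):
        new_state = [
            sigmoid(state[i] + sum(state[j] * weights[j][i] for j in range(n)))
            for i in range(n)
        ]
        if max(abs(a - b) for a, b in zip(new_state, state)) < tol:
            return new_state
        state = new_state
    return state

# Hypothetical 3-concept map: explanation satisfaction -> understanding -> trust.
W = [
    [0.0, 0.6, 0.3],   # satisfaction influences understanding and trust
    [0.0, 0.0, 0.7],   # understanding influences trust
    [0.0, 0.0, 0.0],   # perceived trust (output concept)
]
A0 = [0.8, 0.5, 0.5]   # initial activations elicited from one medical expert
steady = run_fcm(W, A0)
print(round(steady[2], 3))  # quantified perceived-trust value for this ME
```

A value of the output concept above 0.5 at the steady state would be read as trust, below 0.5 as distrust; in the study, the weights come from fuzzy linguistic ratings rather than fixed numbers.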


A Survey on Explainable Artificial Intelligence for Cybersecurity

Rjoub, Gaith, Bentahar, Jamal, Wahab, Omar Abdel, Mizouni, Rabeb, Song, Alyssa, Cohen, Robin, Otrok, Hadi, Mourad, Azzam

arXiv.org Artificial Intelligence

The black-box nature of artificial intelligence (AI) models has been the source of many concerns in their use for critical applications. Explainable Artificial Intelligence (XAI) is a rapidly growing research field that aims to create machine learning models that can provide clear and interpretable explanations for their decisions and actions. In the field of network cybersecurity, XAI has the potential to revolutionize the way we approach network security by enabling us to better understand the behavior of cyber threats and to design more effective defenses. In this survey, we review the state of the art in XAI for cybersecurity in network systems and explore the various approaches that have been proposed to address this important problem. The review follows a systematic classification of network-driven cybersecurity threats and issues. We discuss the challenges and limitations of current XAI methods in the context of cybersecurity and outline promising directions for future research.


XAI Renaissance: Redefining Interpretability in Medical Diagnostic Models

Mandala, Sujith K

arXiv.org Artificial Intelligence

As machine learning models become increasingly prevalent in medical diagnostics, the need for interpretability and transparency becomes paramount. The XAI Renaissance signifies a significant shift in the field, aiming to redefine the interpretability of medical diagnostic models. This paper explores the innovative approaches and methodologies within the realm of Explainable AI (XAI) that are revolutionizing the interpretability of medical diagnostic models. By shedding light on the underlying decision-making process, XAI techniques empower healthcare professionals to understand, trust, and effectively utilize these models for accurate and reliable medical diagnoses. This review highlights the key advancements in XAI for medical diagnostics and their potential to transform the healthcare landscape, ultimately improving patient outcomes and fostering trust in AI-driven diagnostic systems.


Explainable Activity Recognition for Smart Home Systems

Das, Devleena, Nishimura, Yasutaka, Vivek, Rajan P., Takeda, Naoto, Fish, Sean T., Ploetz, Thomas, Chernova, Sonia

arXiv.org Artificial Intelligence

Smart home environments are designed to provide services that help improve the quality of life for the occupant via a variety of sensors and actuators installed throughout the space. Many automated actions taken by a smart home are governed by the output of an underlying activity recognition system. However, activity recognition systems may not be perfectly accurate, and the resulting inconsistencies in smart home operations can lead users reliant on smart home predictions to wonder "why did the smart home do that?" In this work, we build on insights from Explainable Artificial Intelligence (XAI) techniques and introduce an explainable activity recognition framework in which we leverage leading XAI methods to generate natural language explanations of what about an activity led to the given classification. Within the context of remote caregiver monitoring, we perform a two-step evaluation: (a) we ask ML experts to assess the sensibility of the explanations, and (b) we recruit non-experts in two remote caregiver monitoring scenarios (synchronous and asynchronous) to assess the effectiveness of explanations generated via our framework. Our results show that the XAI approach, SHAP, has a 92% success rate in generating sensible explanations. Moreover, in 83% of sampled scenarios users preferred natural language explanations over a simple activity label, underscoring the need for explainable activity recognition systems. Finally, we show that explanations generated by some XAI methods can lead users to lose confidence in the accuracy of the underlying activity recognition model. We make a recommendation regarding which existing XAI method leads to the best performance in the domain of smart home automation, and discuss a range of topics for future work to further improve explainable activity recognition.
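As a minimal illustration of the framework's final step, the template below turns per-feature attributions (such as SHAP values) into a short natural-language explanation. The sensor names, attribution scores, and sentence template are hypothetical, not the authors' implementation.

```python
# Sketch: render the top positively contributing features of an activity
# classification as a natural-language explanation.

def explain(activity, attributions, top_k=2):
    """Build a sentence from the top-k positive feature attributions."""
    ranked = sorted(attributions.items(), key=lambda kv: kv[1], reverse=True)
    top = [name for name, score in ranked[:top_k] if score > 0]
    if not top:
        return f"The activity was classified as '{activity}'."
    return (f"The activity was classified as '{activity}' mainly because "
            + " and ".join(top) + ".")

# Hypothetical attributions for one prediction by an activity recognizer.
attr = {
    "the kitchen motion sensor fired": 0.41,
    "the stove was turned on": 0.33,
    "the front door stayed closed": -0.05,
}
print(explain("cooking", attr))
# -> The activity was classified as 'cooking' mainly because the kitchen
#    motion sensor fired and the stove was turned on.
```

In practice the attribution scores would come from an XAI method such as SHAP applied to the activity recognition model, with sensor events as features.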


Context, Utility and Influence of an Explanation

Patil, Minal Suresh, Främling, Kary

arXiv.org Artificial Intelligence

Contextual utility theory integrates context-sensitive factors into utility-based decision-making models. It stresses the importance of understanding individual decision-makers' preferences, values, and beliefs and the situational factors that affect them. Contextual utility theory benefits explainable AI in several ways. First, it can improve transparency and understanding of how AI systems affect decision-making, revealing AI model biases and limitations by considering personal preferences and context. Second, it can make AI systems more personalized and adaptable to users and stakeholders: AI systems can better meet user needs and values by incorporating demographic and cultural data. Finally, contextual utility theory promotes ethical AI development and social responsibility, since AI developers can create ethical systems that benefit society by considering contextual factors like societal norms and values. This work demonstrates how contextual utility theory can improve AI system transparency, personalization, and ethics, benefiting both users and developers.


Explainable AI over the Internet of Things (IoT): Overview, State-of-the-Art and Future Directions

Jagatheesaperumal, Senthil Kumar, Pham, Quoc-Viet, Ruby, Rukhsana, Yang, Zhaohui, Xu, Chunmei, Zhang, Zhaoyang

arXiv.org Artificial Intelligence

Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices keeps growing, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey of the use of XAI for IoT. To bridge this gap, in this paper we review XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), the Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest the implementation choice of XAI models over IoT systems in these applications with appropriate examples and summarize the key inferences for future works. Moreover, we present cutting-edge developments in edge XAI structures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored to the demands of future IoT use cases.


AI's Mystery and Its Demystification

#artificialintelligence

The term 'Artificial Intelligence', commonly abbreviated as AI, was coined in 1955 by John McCarthy, who is known as the Father of Artificial Intelligence. Although the term dates from the mid-1950s, many people assume that AI is a recent development. In fact, work on AI started decades ago, and it remains one of the most heavily researched areas today; AI has even been predicted to perform surgeries on its own by the year 2048. So, what exactly is AI? Artificial Intelligence is the effort to simulate human intelligence through machines. Looking at the history of computers and their associated technologies, the initial concept was to build machines that perform a specified task so as to enhance accuracy, increase efficiency, and cut down on human-prone errors.


The Importance of Explainable AI - Insurance Thought Leadership

#artificialintelligence

Explainable AI can help decision-makers in insurance understand the rationale and logic behind AI and machine learning results. "Most businesses believe that machine learning models are opaque and non-intuitive and that no information is provided regarding their decision-making and predictions," says Swathi Young, host at Women in AI. Explainable AI is evolving to give meaning to artificial intelligence and machine learning in insurance. An XAI model surfaces the key factors behind a decision, explaining both passed and not-passed cases. The features extracted from the insurance customer's profile and the accident image are highlighted by the XAI model.